

Section: New Results

Homogenization

Participants : Michael Bertin, Ludovic Chamoin, Virginie Ehrlacher, Thomas Hudson, Marc Josien, Claude Le Bris, Frédéric Legoll, Simon Lemaire, François Madiot, William Minvielle.

Deterministic nonperiodic systems

The homogenization of (deterministic) nonperiodic systems is a well-known topic. Although thoroughly explored theoretically by many authors, it has been much less investigated from the standpoint of numerical approaches (except in the random setting). In collaboration with X. Blanc (Paris 7) and P.-L. Lions (Collège de France), C. Le Bris has introduced a possible theory, giving rise to a numerical approach, for the simulation of multiscale nonperiodic systems. The theoretical considerations are based on earlier works by the same authors (the derivation of an algebra of functions appropriate to formalize a theory of homogenization). The numerical endeavor is entirely new. The theoretical results obtained to date are being collected in a series of manuscripts that will be available shortly. The publications [30] and [10] specifically address the issues related to a local perturbation of the periodic problem and the challenging, practically relevant problem of interfaces between periodic structures of different natures (the celebrated "twin boundaries" problem in materials science). Related problems will now be addressed in the context of the PhD thesis of M. Josien.

Stochastic homogenization

The project-team has pursued its efforts in the field of stochastic homogenization of elliptic equations, aiming at designing numerical approaches that are both practically relevant and computationally affordable.

Standard homogenization theory shows that the homogenized tensor, a deterministic matrix, depends on the solution of a stochastic equation, the so-called corrector problem, which is posed on the whole space $\mathbb{R}^d$. This equation is therefore delicate and expensive to solve. In practice, $\mathbb{R}^d$ is truncated to some bounded domain, on which the corrector problem is solved numerically. This yields a converging approximation of the homogenized tensor, which happens to be a random matrix.
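In the standard stationary ergodic setting, these objects take the following form (the notation may differ slightly from that of the works cited below): for each direction $p \in \mathbb{R}^d$, the corrector $w_p$ solves
$$-\operatorname{div}\big[ A(\cdot,\omega)\,(p + \nabla w_p) \big] = 0 \quad \text{in } \mathbb{R}^d,$$
with $\nabla w_p$ stationary, and the homogenized matrix is given by
$$A^\star p = \mathbb{E}\left[ \int_Q A(y,\cdot)\,\big(p + \nabla w_p(y,\cdot)\big)\,dy \right],$$
where $Q$ is the unit cell. The practical approximation mentioned above consists in solving the corrector problem on a large but bounded box $Q_N = (-N,N)^d$ (with, say, periodic boundary conditions) and in setting
$$A^\star_N(\omega)\, p = \frac{1}{|Q_N|} \int_{Q_N} A(y,\omega)\,\big(p + \nabla w_p^N(y,\omega)\big)\,dy,$$
which is a random matrix converging to $A^\star$ as $N \to \infty$.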

In [47], C. Le Bris, F. Legoll and W. Minvielle have investigated the possibility of using a variance reduction technique based on computing the corrector equation only for selected environments. These environments are chosen so that their statistics within the finite supercell match the statistics of the material in the infinite supercell. This method yields an approximation of the homogenized matrix with a smaller error than standard approximations. The efficiency of the approach has been demonstrated for various types of random materials, including composite materials with randomly located inclusions.
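For illustration only, a minimal sketch of this selection idea could look as follows; the helper names draw_medium, cell_statistic and homogenized_matrix are hypothetical placeholders for user-supplied routines (generation of a realization on the supercell, computation of its empirical statistic, and resolution of the truncated corrector problem, respectively), and the sketch does not reproduce the precise selection criterion of [47].

import numpy as np

def selected_average(draw_medium, cell_statistic, homogenized_matrix,
                     target_statistic, n_draws=1000, tol=1e-2, seed=0):
    """Average the apparent homogenized matrix A*_N only over realizations whose
    supercell statistic is close to the exact (infinite-volume) value, with the
    aim of reducing the variance of the Monte Carlo estimator."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_draws):
        medium = draw_medium(rng)                       # one realization of the random medium
        if abs(cell_statistic(medium) - target_statistic) < tol:
            # the expensive corrector solve is performed only for selected environments
            kept.append(homogenized_matrix(medium))
    if not kept:
        raise RuntimeError("no realization matched the selection criterion; increase tol")
    return np.mean(kept, axis=0), len(kept)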

In addition, M. Bertin and F. Legoll, in collaboration with S. Brisard (École des Ponts), have investigated the possibility of using the Hashin-Shtrikman bounds as control variates in a variance reduction approach. The Hashin-Shtrikman bounds are often used in the computational mechanics community as approximations of the homogenized quantities. Our aim is to use them to improve the efficiency of the reference computations, somewhat in the spirit of a preconditioner. Preliminary, encouraging numerical results have been obtained.
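Schematically, a control variate estimator built from such a surrogate takes the usual form (this is a generic sketch, not necessarily the exact estimator used in the ongoing work): if $A^\star_N(\omega)$ denotes the apparent homogenized matrix computed on a realization $\omega$ and $B(\omega)$ a cheap surrogate, here a Hashin-Shtrikman-type quantity computed on the same realization, with known or inexpensively computed expectation $\mathbb{E}[B]$, then
$$\widehat{A}^\star_{N,M} = \frac{1}{M} \sum_{m=1}^{M} \Big[ A^\star_N(\omega_m) - \rho\,\big( B(\omega_m) - \mathbb{E}[B] \big) \Big],$$
where the deflation parameter $\rho$ is chosen so as to minimize the variance of the estimator.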

Over the past years, the project-team has proposed several variance reduction techniques, see e.g. [21] for a method using antithetic variables (in a nonlinear context) and [20] for a control variate approach using a surrogate model based on a defect-type theory. These various approaches have been reviewed and compared to one another in [29] .

In collaboration with B. Stamm (Paris 6), E. Cancès, V. Ehrlacher and F. Legoll have proposed in [13] a new approach to approximate the homogenized coefficients of a random stationary material. This method is an alternative to the one proposed e.g. by A. Bourgeat and A. Piatnitski in [Approximations of effective coefficients in stochastic homogenization, Annales de l'Institut Henri Poincaré 40, 2004], which consists in solving a corrector problem on a bounded domain. The method introduced in [13] is based on a new corrector problem, which is posed on the entire space, but which is simpler than the standard corrector problem in that the coefficients of the equation are uniform outside some ball of finite radius. As a consequence, in some cases (including that of randomly located spherical inclusions), this new corrector problem can be recast as an integral equation posed on the surface of the inclusions. The problem can then be efficiently solved using domain decomposition and spherical harmonics.

Multiscale Finite Element approaches

From a numerical point of view, the Multiscale Finite Element Method (MsFEM) is a classical strategy to address the situation when the homogenized problem is not known (e.g. in difficult nonlinear cases), or when the scale of the heterogeneities, although small, is not considered to be zero (and hence the homogenized problem cannot be considered as an accurate enough approximation).

The MsFEM was introduced more than ten years ago. However, even in simple deterministic cases, some questions remain open, for instance concerning multiscale advection-diffusion equations. Such problems may be advection-dominated, and a stabilization procedure is then required. How stabilization interacts with the multiscale character of the equation is an unsolved mathematical question worth considering for numerical purposes. In that spirit, C. Le Bris, F. Legoll and F. Madiot have studied in [46] several variants of the MsFEM, specifically designed to address multiscale advection-diffusion problems in the convection-dominated regime. Generally speaking, the idea of the MsFEM is to perform a Galerkin approximation of the problem using specific basis functions that are precomputed (in an offline stage) and adapted to the problem considered. Several choices of basis functions have been examined (for instance, they may or may not encode the convection field). Depending on how the basis functions are defined, stabilization techniques (such as SUPG) may be required. Another option to handle such problems is to use a splitting approach with two legacy codes, one solving a purely diffusive multiscale equation, the other solving a single-scale, convection-dominated advection-diffusion equation. In [46], these various approaches have been compared in terms of accuracy and computational cost.
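For illustration, the following is a minimal, self-contained sketch of the basic MsFEM idea in one dimension, for a purely diffusive problem (no convection, no stabilization); the coefficient, meshes and right-hand side are arbitrary choices made for this sketch and are not the test cases of [46].

import numpy as np

# 1D MsFEM sketch for -(a(x) u')' = f on (0,1), u(0) = u(1) = 0, with oscillatory a.
eps = 0.01
a = lambda x: 1.0 / (2.0 + 1.8 * np.sin(2.0 * np.pi * x / eps))  # oscillatory coefficient
f = lambda x: np.ones_like(x)                                    # right-hand side

Nc, nf = 10, 400        # coarse elements, fine cells per coarse element
Xc = np.linspace(0.0, 1.0, Nc + 1)

def local_basis(xl, xr):
    """Multiscale shape functions on (xl, xr): solutions of -(a phi')' = 0 with
    nodal values (1,0) and (0,1). In 1D they are explicit via the primitive of 1/a."""
    x = np.linspace(xl, xr, nf + 1)
    h = x[1] - x[0]
    am = a(0.5 * (x[:-1] + x[1:]))                    # coefficient at fine-cell midpoints
    cum = np.concatenate(([0.0], np.cumsum(h / am)))  # primitive of 1/a
    phiR = cum / cum[-1]                              # equals 0 at xl, 1 at xr
    return x, h, am, 1.0 - phiR, phiR

# Offline stage: local problems; online stage: coarse Galerkin assembly and solve.
K = np.zeros((Nc + 1, Nc + 1))
F = np.zeros(Nc + 1)
for e in range(Nc):
    x, h, am, phiL, phiR = local_basis(Xc[e], Xc[e + 1])
    dL, dR = np.diff(phiL) / h, np.diff(phiR) / h
    K[e:e + 2, e:e + 2] += h * np.array([[np.sum(am * dL * dL), np.sum(am * dL * dR)],
                                         [np.sum(am * dR * dL), np.sum(am * dR * dR)]])
    fm = f(0.5 * (x[:-1] + x[1:]))
    F[e:e + 2] += h * np.array([np.sum(fm * 0.5 * (phiL[:-1] + phiL[1:])),
                                np.sum(fm * 0.5 * (phiR[:-1] + phiR[1:]))])

U = np.zeros(Nc + 1)                                  # homogeneous Dirichlet conditions
U[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
print("coarse nodal values:", U)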

In the context of the PhD thesis of F. Madiot, current efforts are focused on the study of an advection-diffusion equation with a dominating convection in a perforated domain. The multiscale character of the problem here stems from the geometry of the domain. A paramount difference with the case considered in [46] is that boundary layers may appear throughout the domain (i.e. in the neighborhood of each perforation). The accuracy of the numerical approaches in the boundary layers thus becomes critical.

Most numerical analysis studies of the MsFEM focus on obtaining a priori error bounds. In collaboration with L. Chamoin, who is currently on delegation in the project-team for the second year (from ENS Cachan, since September 2014), members of the project-team have been working on a posteriori error analysis for MsFEM approaches, with the aim of developing error estimation and adaptation tools. They have extended to the MsFEM case an approach that is classical in the computational mechanics community for single-scale problems, based on the so-called Constitutive Relation Error (CRE). Once a numerical solution $u_h$ has been obtained, the approach requires additional computations in order to determine a divergence-free field as close as possible to the exact flux $k \nabla u$. In the context of the MsFEM, it is important to perform all the expensive computations in an offline stage, independently of the right-hand side. The standard CRE approach thus needs to be adapted to that context, in order to preserve the feature that makes it suitable for a multiscale, multi-query setting. The proposed approach yields very interesting results and provides an accurate and robust estimation of the global error.
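In its standard single-scale form (the Prager-Synge framework), for the diffusion problem $-\operatorname{div}(k\nabla u) = f$ posed on a domain $\Omega$, the CRE estimate reads: for an admissible approximation $u_h$ and any reconstructed flux $\hat q$ equilibrated with the load (i.e. $\operatorname{div}\hat q + f = 0$, a divergence-free field when $f$ vanishes),
$$\| \nabla(u - u_h) \|_{k} \;\le\; E_{\mathrm{CRE}}(u_h, \hat q) := \left( \int_\Omega k^{-1} \big( \hat q - k\nabla u_h \big) \cdot \big( \hat q - k\nabla u_h \big) \right)^{1/2},$$
where $\| \nabla v \|_k^2 = \int_\Omega k\,\nabla v \cdot \nabla v$. The adaptation mentioned above concerns the way $\hat q$ is reconstructed, so that the expensive computations can indeed be performed offline, independently of the right-hand side.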

Current efforts are targeted towards the design of adaptive algorithms for specific quantities of interest (in the so-called “goal-oriented” setting), and the design of model reduction approaches (such as the Proper Generalized Decomposition, or PGD) in the specific context of multiscale problems.

Coarse approximation of an elliptic problem with oscillatory coefficients

Yet another question investigated in the project-team is how to find an alternative to standard homogenization techniques when the latter are difficult to use in practice. Consider a linear elliptic equation, say in divergence form, with a highly oscillatory matrix coefficient, and assume that this problem is to be solved for a large number of right-hand sides. If the coefficient oscillations are infinitely rapid, the solution can be accurately approximated by the solution to the homogenized problem, where the homogenized coefficient has been evaluated beforehand by solving the corrector problem. If the oscillations are only moderately rapid, one can instead think of MsFEM-type approaches to approximate the solution to the reference problem. However, in both cases, complete knowledge of the oscillatory matrix coefficient is required, either to build the averaged model or to compute the multiscale basis. In many practical cases, this coefficient is only partially known, or even completely unavailable, and one only has access to the solution of the equation for some loadings. This observation has led us to think about alternative methods, in the following spirit. Is it possible to approximate the reference solution by the solution to a problem with a constant matrix coefficient? And how can this "best" constant matrix approximating the oscillatory problem be constructed efficiently?

A preliminary step, following discussions and interactions with A. Cohen (Paris 6), has been to cast the problem as a convex optimization problem. We have then shown that the "best" constant matrix, defined as the solution of that problem, converges to the homogenized matrix in the limit of infinitely rapidly oscillating coefficients. Furthermore, the optimization problem being convex, it can be efficiently solved using standard algorithms. C. Le Bris, F. Legoll and S. Lemaire have comprehensively explored this problem. The algorithm can be made very efficient, and it yields an accurate approximation of the homogenized matrix. We have also shown that it is possible to construct, in a second stage, approximations of the correctors, in order to recover an approximation of the gradient of the solution.
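Schematically, and without reproducing the precise norms and reformulation that make the problem convex in the work mentioned above, the question can be written as follows: denoting by $u_\epsilon(f)$ and $u_{\overline A}(f)$ the solutions of $-\operatorname{div}\big(A_\epsilon \nabla u_\epsilon\big) = f$ and $-\operatorname{div}\big(\overline A \nabla u_{\overline A}\big) = f$ (with the same boundary conditions), one looks for
$$\overline A^\star \in \operatorname*{argmin}_{\overline A \ \text{constant, symmetric, positive definite}} \ \sup_{f \neq 0} \ \frac{\big\| u_\epsilon(f) - u_{\overline A}(f) \big\|}{\| f \|},$$
where the choice of the norms is part of the formulation and governs the reformulation as a convex problem.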

Optimization of a material microstructure

A project involving V. Ehrlacher and F. Legoll, in collaboration with G. Leugering and M. Stingl (Cluster of Excellence, Erlangen-Nuremberg University), aims at optimizing the shape of some materials (modeled as structurally graded linear elastic materials) in order to achieve the best mechanical response at minimal cost. As is often the case in shape optimization, the solution tends to be highly oscillatory, hence the need for homogenization techniques. The materials under consideration are thought of as microstructured materials, composed of steel and void, whose microstructure patterns are obtained as macroscopic deformations of a reference periodic microstructure. The optimal material (i.e. the best macroscopic deformation) is the one achieving the best mechanical response.

For a given deformation, one can first compute the mechanical response using a homogenized model; this is the first variant that has been pursued. Model reduction techniques are then required in order to expedite the resolution of the corrector problem needed to identify the homogenized coefficients at each iteration of the optimization algorithm. In that context, a PGD-type approach has been proposed.
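A PGD-type approximation in this parametrized setting typically seeks the corrector in separated form; the sketch below is only the generic shape of such an approximation, not necessarily the precise formulation used in the project. Denoting by $\mu$ the parameters describing the macroscopic deformation of the reference microstructure, one looks for
$$w_p(y;\mu) \;\approx\; \sum_{k=1}^{K} X_k(y)\, \Lambda_k(\mu),$$
where the modes $X_k$ and $\Lambda_k$ are computed greedily in an offline stage, so that, at each iteration of the optimization loop, evaluating the corrector (and hence the homogenized coefficients) for a new value of $\mu$ only requires cheap online computations.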

A second variant is to compute the mechanical response at the microscale, using the highly oscillatory model. Preliminary results have been obtained, and current efforts are focused on choosing an appropriate model reduction strategy.

Discrete systems and their thermodynamic limit

We conclude this section by describing works of the project-team on discrete models with highly oscillatory coefficients.

Dislocations are geometric line defects which interact via long-range stress fields in crystalline solids. In [45], T. Hudson has studied the thermally-driven motion of dislocations in a discrete Monte Carlo model, showing that, over long observation times, at low temperature and in a large body, the most probable trajectories of straight dislocation lines lie close to the solution of an explicit deterministic evolution equation.

Another work is related to understanding the origin of hysteresis in rubber materials. When submitted to cyclic deformations, the stress-strain curve of these materials indeed exhibits hysteresis, which seems to be independent of the loading rate. Some years ago, members of the project-team suggested a mesoscale model to explain this behavior, written in terms of a system made of a finite number of particles. F. Legoll, T. Lelièvre and T. Hudson are currently studying whether a thermodynamic limit of this model can be identified. To simplify the setting, the reference discrete model has been replaced by a continuum model with highly oscillatory coefficients; this model is nonlinear and time-dependent. The question is now to identify (e.g. using two-scale convergence arguments) its homogenized limit, first in a periodic setting, then in a stochastic setting.